Image Quilting

In this project I implemented the algorithms from this paper (Efros and Freeman's "Image Quilting for Texture Synthesis and Transfer") to perform texture synthesis and transfer via quilting.

Randomly sampled textures

This was pretty easy to do. First I created a function that samples a random patch from an image. For the text texture:

a random sample could look like this:

and then quilted into a 400x400 output with 50x50 patches, it looks like this:
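The random quilting step can be sketched roughly as follows. This is a minimal numpy sketch; the function and parameter names are my own illustrative choices, not from the write-up:

```python
import numpy as np

def quilt_random(sample, out_size, patch_size, rng=None):
    """Tile a square output image with patches sampled uniformly
    at random from `sample` (an H x W or H x W x C array)."""
    rng = np.random.default_rng(rng)
    h, w = sample.shape[:2]
    out = np.zeros((out_size, out_size) + sample.shape[2:], dtype=sample.dtype)
    for y in range(0, out_size, patch_size):
        for x in range(0, out_size, patch_size):
            # pick a random top-left corner inside the sample
            py = rng.integers(0, h - patch_size + 1)
            px = rng.integers(0, w - patch_size + 1)
            # clip the patch at the output border
            ph = min(patch_size, out_size - y)
            pw = min(patch_size, out_size - x)
            out[y:y+ph, x:x+pw] = sample[py:py+ph, px:px+pw]
    return out
```

Since the patches are pasted with no regard for their neighbors, every patch boundary is visible, which is exactly the blockiness described below.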

This method of texture synthesis looks pretty bad: the patch boundaries are obvious and the output is very blocky.

Overlapping patches

For this part, I introduced an error component to choosing a sample. The method I used is as follows: first I take a random sample, then I compute the error over the overlapping area. If this error is under a certain tolerance (chosen empirically; for the text_small image I chose 60) then I return that patch, otherwise I pick another one. To guarantee a maximum runtime, I also added a maximum number of iterations, after which I just return the patch sampled so far with the minimum error, regardless of the tolerance.
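The tolerance-and-retry selection described above might look like this. This is a hedged sketch: the overlap-SSD helper, the left/top overlap layout, and all names are illustrative assumptions rather than the write-up's exact code:

```python
import numpy as np

def ssd_overlap(patch, out, y, x, overlap):
    """Sum of squared differences over the left/top overlap regions
    between a candidate patch and what is already in the output."""
    err = 0.0
    if x > 0:  # left overlap with the previously placed patch
        err += np.sum((patch[:, :overlap] - out[y:y+patch.shape[0], x:x+overlap]) ** 2)
    if y > 0:  # top overlap with the row above
        err += np.sum((patch[:overlap, :] - out[y:y+overlap, x:x+patch.shape[1]]) ** 2)
    return err

def choose_patch(sample, out, y, x, patch_size, overlap,
                 tol=60.0, max_iters=1000, rng=None):
    """Sample random patches until one falls under `tol`; if the
    iteration budget runs out, return the best patch seen so far."""
    rng = np.random.default_rng(rng)
    h, w = sample.shape[:2]
    best_patch, best_err = None, np.inf
    for _ in range(max_iters):
        py = rng.integers(0, h - patch_size + 1)
        px = rng.integers(0, w - patch_size + 1)
        patch = sample[py:py+patch_size, px:px+patch_size]
        err = ssd_overlap(patch, out, y, x, overlap)
        if err < tol:           # good enough: accept immediately
            return patch
        if err < best_err:      # otherwise remember the best so far
            best_patch, best_err = patch, err
    return best_patch
```

The iteration cap is what bounds the runtime: in the worst case the loop runs `max_iters` times and then falls back to the minimum-error patch.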

This method of choosing patches gave much better results:

If you were just glancing, you could mistake this synthesized texture for an actual book! That being said, there are still areas where you can see the blockiness… We can do better!

Seam finding

The final improvement to the algorithm was finding a seam between overlapping areas. To do this, I first had to make a cut function that cuts the overlapping region in a way that minimizes the error across the cut. I did this two ways. First, I made a continuous cut from bottom to top of the overlapping region, essentially seam carving. This worked nicely, cutting along rough boundaries and so on, but it was SLOW, taking over a minute for even the small images. Eventually, I ended up simply cutting at the minimum cost of every row regardless of whether the cut was continuous, resulting in masks that look like this:

As you can see, the boundary is jagged, but this turned out to be MUCH faster (the same images that took over a minute finished in under 5 seconds) and it scaled a bit better when the overlap regions were larger.
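The per-row minimum-cost cut amounts to one argmin per row of the overlap error. A small sketch, assuming a vertical overlap and a precomputed per-pixel squared-error array (names are mine, not the write-up's):

```python
import numpy as np

def rowwise_cut_mask(err):
    """Binary mask for a vertical overlap region: for each row, cut at
    the column of minimum error, keeping everything left of the cut
    from the existing output. Unlike a dynamic-programming seam, rows
    are independent, which is why this version is so much faster."""
    cut_cols = np.argmin(err, axis=1)            # one cut column per row
    cols = np.arange(err.shape[1])
    return cols[None, :] < cut_cols[:, None]     # True = keep old pixels
```

Because adjacent rows can cut at very different columns, the resulting boundary is jagged, as the masks above show; the trade-off is that the whole mask is computed in two vectorized operations.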

After using this pseudo-seam finding, I was able to get results that honestly look near-perfect:

The blockiness is almost entirely gone and the lines look continuous. The words, of course, don't make any sense, but it looks good overall, especially compared to the random sampling above.

Here’s some of my results:


Texture transfer

Texture transfer was very simple to implement. The only change was a new error term: the difference between the sample patch and the corresponding target image patch. Using my picture:

I made myself appear in a book:

Also, by fiddling with the threshold values, I was able to make myself appear and disappear from a book:

You can see that adding a new error term made the alignment of words a bit worse.
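The extra error term might be combined with the overlap error along these lines. This is a sketch only: the `alpha` blend and all names are my assumptions, not the write-up's exact formulation:

```python
import numpy as np

def transfer_error(patch, out_region, target_region, overlap, alpha=0.5):
    """Combined error for texture transfer: `alpha` weights the usual
    overlap SSD against a correspondence term (SSD between the
    candidate patch and the target image patch it would cover)."""
    # how well the candidate matches what is already synthesized
    overlap_err = np.sum((patch[:, :overlap] - out_region[:, :overlap]) ** 2)
    # how well the candidate matches the target image at this location
    corr_err = np.sum((patch - target_region) ** 2)
    return alpha * overlap_err + (1 - alpha) * corr_err
```

The tension described above falls out of this blend: weighting the correspondence term more makes the target show through, but leaves less budget for matching the overlaps, so word alignment suffers.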

I also tried to make myself into an orange, but it just ended up making me look like an alien:

Bells and whistles

I created a funky new cut() function for 5 extra points.

Link to Lightfield Camera

Link to AR